On the False Discovery Rate and Expected Type I Errors

2001 ◽  
Vol 43 (8) ◽  
pp. 985 ◽  
Author(s):  
Helmut Finner ◽  
M. Roters

2021 ◽  
Author(s):  
Ye Yue ◽  
Yijuan Hu

Abstract Background: Understanding whether and which microbes play a mediating role between an exposure and a disease outcome is essential for researchers to develop clinical interventions that treat the disease by modulating the microbes. Existing methods for mediation analysis of the microbiome are often limited to a global test of community-level mediation or to selection of mediating microbes without control of the false discovery rate (FDR). Further, while the null hypothesis of no mediation at each microbe is a composite null that consists of three types of null (no exposure-microbe association, no microbe-outcome association given the exposure, or both), most existing methods for the global test, such as MedTest and MODIMA, treat the microbes as if they were all under the same type of null. Results: We propose a new approach based on inverse regression that regresses the (possibly transformed) relative abundance of each taxon on the exposure and the exposure-adjusted outcome to assess the exposure-taxon and taxon-outcome associations simultaneously. The association p-values are then used to test mediation at both the community and individual taxon levels. This approach fits nicely into our Linear Decomposition Model (LDM) framework, so the new method is implemented in the LDM and enjoys all of its features: allowing an arbitrary number of taxa to be tested; supporting continuous, discrete, or multivariate exposures and outcomes as well as adjustment for confounding covariates; accommodating clustered data; and offering analysis at the relative-abundance or presence-absence scale. We refer to this new method as LDM-med. Using extensive simulations, we show that LDM-med always controlled the type I error of the global test and had compelling power over existing methods; LDM-med always preserved the FDR when testing individual taxa and had much better sensitivity than alternative approaches. In contrast, MedTest and MODIMA had severely inflated type I error when different taxa were under different types of null. The flexibility of LDM-med for a variety of mediation analyses is illustrated by application to a murine microbiome dataset, which identified a plausible mediator. Conclusions: Inverse regression coupled with the LDM performs well and can handle mediation analysis in a wide variety of microbiome studies.
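The composite-null logic described above can be illustrated with a simple joint-significance (max-p) rule per taxon: mediation requires both the exposure-taxon and taxon-outcome associations to hold, so the larger of the two p-values is a conservative mediation p-value. This is a hedged stand-in for illustration only, not the LDM-med procedure itself, and the p-values are made up:

```python
import numpy as np

def taxon_mediation_pvalues(p_exposure, p_outcome):
    """Joint-significance p-value per taxon: a taxon can mediate only if BOTH
    the exposure-taxon and taxon-outcome associations are present, so take
    the elementwise maximum of the two p-values (conservative under the
    composite null). Illustrative stand-in, not LDM-med itself."""
    return np.maximum(np.asarray(p_exposure, dtype=float),
                      np.asarray(p_outcome, dtype=float))

p_exp = [0.001, 0.500, 0.004]   # hypothetical exposure -> taxon p-values
p_out = [0.002, 0.003, 0.600]   # hypothetical taxon -> outcome p-values
print(taxon_mediation_pvalues(p_exp, p_out).tolist())
```

Only the first taxon, where both associations are small, yields a small combined p-value; FDR control (e.g. Benjamini–Hochberg) would then be applied across taxa.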


BMC Genetics ◽  
2005 ◽  
Vol 6 (Suppl 1) ◽  
pp. S134 ◽  
Author(s):  
Qiong Yang ◽  
Jing Cui ◽  
Irmarie Chazaro ◽  
L Adrienne Cupples ◽  
Serkalem Demissie

Genetics ◽  
1998 ◽  
Vol 150 (4) ◽  
pp. 1699-1706 ◽  
Author(s):  
Joel Ira Weller ◽  
Jiu Zhou Song ◽  
David W Heyen ◽  
Harris A Lewin ◽  
Micha Ron

Abstract Saturated genetic marker maps are being used to map individual genes affecting quantitative traits. Controlling the “experimentwise” type-I error severely lowers power to detect segregating loci. For preliminary genome scans, we propose controlling the “false discovery rate,” that is, the expected proportion of true null hypotheses within the class of rejected null hypotheses. Examples are given based on a granddaughter design analysis of dairy cattle and simulated backcross populations. By controlling the false discovery rate, power to detect true effects is not dependent on the number of tests performed. If no detectable genes are segregating, controlling the false discovery rate is equivalent to controlling the experimentwise error rate. If quantitative loci are segregating in the population, statistical power is increased as compared to control of the experimentwise type-I error. The difference between the two criteria increases with the increase in the number of false null hypotheses. The false discovery rate can be controlled at the same level whether the complete genome or only part of it has been analyzed. Additional levels of contrasts, such as multiple traits or pedigrees, can be handled without the necessity of a proportional decrease in the critical test probability.
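The false-discovery-rate criterion advocated above is typically controlled with the Benjamini–Hochberg step-up procedure; a minimal sketch (not code from this study, and with made-up p-values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with the
    k smallest p-values, where k is the largest rank with p_(k) <= (k/m) * q.
    Returns a boolean rejection mask in the original order; controls the FDR
    at level q for independent (or positively dependent) tests."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank passing its threshold
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.30, 0.74]
print(benjamini_hochberg(pvals, q=0.05).tolist())
```

Note that the step-up rule can reject a hypothesis whose p-value exceeds its own threshold, as long as some larger-ranked p-value passes; this is what makes the procedure less dependent on the total number of tests than an experimentwise (Bonferroni-type) correction.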


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11907
Author(s):  
Yingli Fu ◽  
Xiaojun Ren ◽  
Wei Bai ◽  
Qiong Yu ◽  
Yaoyao Sun ◽  
...  

Background Schizophrenia is a severe, multifactorial neuropsychiatric disorder, and the majority of cases are attributable to genetic variation. In this study, we evaluated the genetic association between the C-Maf-inducing protein (CMIP) gene and schizophrenia in the Han Chinese population. Methods In this case-control study, 761 schizophrenia patients and 775 healthy controls were recruited. Tag single-nucleotide polymorphisms (SNPs; rs12925980, rs2287112, rs3751859 and rs77700579) from the CMIP gene were genotyped via matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. We used logistic regression to estimate the associations between the genotypes/alleles of each SNP and schizophrenia in males and females, respectively. The link between CMIP and schizophrenia was explored further through linkage disequilibrium (LD) and haplotype analyses. False discovery rate correction was used to control the Type I errors caused by multiple comparisons. Results There was a significant difference in rs2287112 allele frequencies between female schizophrenia patients and healthy controls after adjusting for multiple comparisons (χ2 = 12.296, Padj = 0.008). Females carrying the minor allele G had a 4.445-fold higher risk of schizophrenia compared with those carrying the T allele (OR = 4.445, 95% CI [1.788–11.046]). Linkage disequilibrium was not observed in the subjects, and people with the TTGT haplotype of rs12925980–rs2287112–rs3751859–rs77700579 had a lower risk of schizophrenia (OR = 0.42, 95% CI [0.19–0.94]) compared with the CTGA haplotype. However, this association did not survive false discovery rate correction. Conclusion This study identified a potential CMIP variant that may confer schizophrenia risk in the female Han Chinese population.
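The allele-level odds ratios reported above come from 2×2 allele count tables; a minimal sketch of an OR with a Wald 95% confidence interval, using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 allele count table:
    rows = cases/controls, columns = minor/major allele counts.
    a = case minor, b = case major, c = control minor, d = control major."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical allele counts, NOT the study's data.
or_, (lo, hi) = odds_ratio_ci(16, 740, 4, 820)
print(f"OR = {or_:.3f}, 95% CI [{lo:.3f}-{hi:.3f}]")
```

With sparse minor-allele counts, as here, the Wald interval is wide, which is consistent with the broad CI reported in the abstract.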


2014 ◽  
Vol 42 (11) ◽  
pp. e95-e95 ◽  
Author(s):  
Aaron T.L. Lun ◽  
Gordon K. Smyth

Abstract A common aim in ChIP-seq experiments is to identify changes in protein binding patterns between conditions, i.e. differential binding. A number of peak- and window-based strategies have been developed to detect differential binding when the regions of interest are not known in advance. However, careful consideration of error control is needed when applying these methods. Peak-based approaches use the same data set to define peaks and to detect differential binding. Done improperly, this can result in loss of type I error control. For window-based methods, controlling the false discovery rate over all detected windows does not guarantee control across all detected regions. Misinterpreting the former as the latter can result in unexpected liberalness. Here, several solutions are presented to maintain error control for these de novo counting strategies. For peak-based methods, peak calling should be performed on pooled libraries prior to the statistical analysis. For window-based methods, a hybrid approach using Simes’ method is proposed to maintain control of the false discovery rate across regions. More generally, the relative advantages of peak- and window-based strategies are explored using a range of simulated and real data sets. Implementations of both strategies also compare favourably to existing programs for differential binding analyses.


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 110-115 ◽  
Author(s):  
Rand R. Wilcox ◽  
Jinxia Ma

Abstract. The paper compares methods that allow both within-group and between-group heteroscedasticity when performing all pairwise comparisons of the least squares lines associated with J independent groups. The methods are based on a simple extension of results derived by Johansen (1980) and Welch (1938) in conjunction with the HC3 and HC4 estimators. The probability of one or more Type I errors is controlled using the improvement on the Bonferroni method derived by Hochberg (1988). Results are illustrated using data from the Well Elderly 2 study, which motivated this paper.
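Hochberg's (1988) step-up improvement on Bonferroni, used above to control the probability of one or more Type I errors, can be sketched as follows (the p-values are made up):

```python
def hochberg(pvals, alpha=0.05):
    """Hochberg's step-up procedure: with sorted p-values p_(1) <= ... <= p_(m),
    find the largest rank k with p_(k) <= alpha / (m - k + 1) and reject
    H_(1), ..., H_(k). Controls the familywise error rate at alpha and is
    uniformly more powerful than Bonferroni."""
    indexed = sorted(enumerate(pvals), key=lambda t: t[1])
    m = len(indexed)
    reject = [False] * m
    for rank in range(m, 0, -1):          # step up from the largest p-value
        idx, p = indexed[rank - 1]
        if p <= alpha / (m - rank + 1):
            for j in range(rank):         # reject this and all smaller p-values
                reject[indexed[j][0]] = True
            break
    return reject

print(hochberg([0.01, 0.04, 0.30], alpha=0.05))
```

Here only the smallest p-value is rejected: 0.30 fails its threshold of 0.05, 0.04 fails 0.025, and 0.01 passes 0.05/3.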


2020 ◽  
Vol 39 (3) ◽  
pp. 185-208
Author(s):  
Qiao Xu ◽  
Rachana Kalelkar

SUMMARY This paper examines whether inaccurate going-concern opinions negatively affect the audit office's reputation. Assuming that clients perceive the incidence of going-concern opinion errors as a systematic audit quality concern within the entire audit office, we expect these inaccuracies to impact the audit office market share and dismissal rate. We find that going-concern opinion inaccuracy is negatively associated with the audit office market share and is positively associated with the audit office dismissal rate. Furthermore, we find that the decline in market share and the increase in dismissal rate are primarily associated with Type I errors. Additional analyses reveal that the negative consequence of going-concern opinion inaccuracy is lower for Big 4 audit offices. Finally, we find that the decrease in the audit office market share is explained by the distressed clients' reactions to Type I errors and audit offices' lack of ability to attract new clients.

